A Debiased MDI Feature Importance Measure for Random Forests

Neural Information Processing Systems

Tree ensembles such as Random Forests have achieved impressive empirical success across a wide variety of applications. To understand how these models make predictions, people routinely turn to feature importance measures calculated from tree ensembles. It has long been known that Mean Decrease Impurity (MDI), one of the most widely used measures of feature importance, incorrectly assigns high importance to noisy features, leading to systematic bias in feature selection. In this paper, we address the feature selection bias of MDI from both theoretical and methodological perspectives. Based on the original definition of MDI by Breiman et al. \cite{Breiman1984} for a single tree, we derive a tight non-asymptotic bound on the expected bias of MDI importance of noisy features, showing that deep trees have higher (expected) feature selection bias than shallow ones. However, it is not clear how to reduce the bias of MDI using its existing analytical expression. We derive a new analytical expression for MDI, and based on this new expression, we are able to propose a debiased MDI feature importance measure using out-of-bag samples, called MDI-oob. For both the simulated data and a genomic ChIP dataset, MDI-oob achieves state-of-the-art performance in feature selection from Random Forests for both deep and shallow trees.
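The abstract's claim, that MDI assigns positive importance to noisy features and that the bias grows with tree depth, can be checked with a small simulation. This is an illustrative sketch using scikit-learn's built-in MDI (`feature_importances_`); the data and settings are invented here, not taken from the paper:

```python
# Sketch (not from the paper): a pure-noise feature receives positive MDI
# importance, and more so for fully grown trees than for shallow ones.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.normal(size=n),    # column 0: informative feature
                     rng.normal(size=n)])   # column 1: pure noise, independent of y
y = X[:, 0] + 0.5 * rng.normal(size=n)

mdi_by_depth = {}
for depth in (3, None):                     # shallow trees vs fully grown trees
    rf = RandomForestRegressor(n_estimators=200, max_depth=depth,
                               random_state=0).fit(X, y)
    mdi_by_depth[depth] = rf.feature_importances_   # scikit-learn's MDI
    print(depth, mdi_by_depth[depth])
```

On data like this, the noise column's MDI is nonzero even though it carries no signal, and the deep-tree forest assigns it noticeably more importance than the depth-3 forest, matching the paper's depth result.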


Reviews: A Debiased MDI Feature Importance Measure for Random Forests

Neural Information Processing Systems

I am updating my score from 7 to 8.

Originality: The main contributions are all original. While the take-home message of the study is, in retrospect, simple and obvious (compute MDI importances on out-of-bag samples), the paper provides an original analysis that explains and justifies this modification of the computation of MDI importances. Some remarks, however: I would have appreciated a controlled experiment in which G0(T) can be computed exactly, in order to empirically assess the (supposed) tightness of the bound. More specifically, what happens if A1 and A2 are not satisfied? In real-world setups, A1 is very unlikely to hold.
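The reviewer's one-line summary of the method, computing impurity-based importances on samples not used to grow the tree, can be sketched as follows. This is an illustrative simplification and not the paper's MDI-oob estimator: it re-evaluates each split's variance decrease on a single held-out set rather than on per-tree out-of-bag samples, and the function name is invented for this sketch.

```python
# Sketch of the debiasing idea: re-score each split's impurity decrease on
# held-out data. Splits that only fit training noise then contribute little
# (or negative) importance. Not the paper's exact MDI-oob computation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def heldout_impurity_importance(rf, X_eval, y_eval):
    """Per-feature sum of variance decreases, measured on held-out data."""
    n_eval, p = X_eval.shape
    imp = np.zeros(p)
    for est in rf.estimators_:
        tree = est.tree_
        # boolean node-membership matrix for the evaluation samples
        path = est.decision_path(X_eval).toarray().astype(bool)
        for node in range(tree.node_count):
            f = tree.feature[node]
            if f < 0:                       # leaf node: no split
                continue
            members = path[:, node]
            y_node = y_eval[members]
            if y_node.size < 2:
                continue
            go_left = X_eval[members, f] <= tree.threshold[node]
            y_l, y_r = y_node[go_left], y_node[~go_left]
            var_l = y_l.var() if y_l.size else 0.0
            var_r = y_r.var() if y_r.size else 0.0
            # weighted impurity decrease at this split, on held-out data;
            # may be negative when the split fit only training noise
            imp[f] += (y_node.size * y_node.var()
                       - y_l.size * var_l - y_r.size * var_r) / n_eval
    return imp / len(rf.estimators_)

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 2))               # column 0: signal, column 1: noise
y = X[:, 0] + 0.5 * rng.normal(size=600)
X_tr, y_tr, X_ev, y_ev = X[:400], y[:400], X[400:], y[400:]
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(rf.feature_importances_)              # in-sample MDI inflates the noise column
print(heldout_impurity_importance(rf, X_ev, y_ev))
```

Under this held-out evaluation, the noise column's importance shrinks toward zero while the signal column keeps a large positive score, which is the behavior the out-of-bag variant is designed to achieve.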


Reviews: A Debiased MDI Feature Importance Measure for Random Forests

Neural Information Processing Systems

The paper theoretically studies the bias of the popular MDI importance measure in the presence of noisy features and proposes a very simple practical solution to reduce it. Two reviewers are very enthusiastic about the paper, even more so after reading the authors' response. One reviewer has several valid concerns about missing links between theory and practice but still recommends acceptance. I therefore recommend accepting the paper. The authors are asked to take into account the reviewers' comments when preparing the final version of their paper and, in particular, to address the specific request of Reviewer 2 (to clarify how MDI-oob is computed).



A Debiased MDI Feature Importance Measure for Random Forests

Xiao Li, Yu Wang, Sumanta Basu, Karl Kumbier, Bin Yu

Neural Information Processing Systems
